Adaptive mesh refinement (AMR) is necessary for efficient finite element simulations of complex physical phenomena, as it allocates limited computational budget based on the need for higher or lower resolution, which varies over space and time. We present a novel formulation of AMR as a fully cooperative Markov game, in which each element is an independent agent that makes refinement and de-refinement choices based on local information. We design a novel deep multi-agent reinforcement learning (MARL) algorithm called Value Decomposition Graph Network (VDGN), which solves the two core challenges that AMR poses for MARL: posthumous credit assignment due to agent creation and deletion, and unstructured observations due to the diversity of mesh geometries. For the first time, we show that MARL enables anticipatory refinement of regions that will encounter complex features at future times, thereby unlocking entirely new regions of the error-cost objective landscape that are inaccessible by traditional methods based on local error estimators. Comprehensive experiments show that VDGN policies significantly outperform error-threshold-based policies in global error and cost metrics. We show that learned policies generalize to test problems with physical features, mesh geometries, and longer simulation times not seen in training. We also extend VDGN with multi-objective optimization capabilities to find the Pareto front of the tradeoff between cost and error.
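To make the Markov-game formulation concrete, below is a minimal sketch, not the authors' implementation: each element is an agent that scores its local observation over a refine/no-op/de-refine action space, and the joint value decomposes as the sum of per-agent values, in the spirit of VDGN's value decomposition. The linear scoring function and all names (`per_agent_q`, `joint_value`, the feature dimension) are illustrative assumptions; the paper's method would replace the scorer with a graph network over mesh connectivity.

```python
# Minimal sketch of AMR as a fully cooperative Markov game with a
# sum-style value decomposition. Illustrative assumptions throughout.
import numpy as np

ACTIONS = ("refine", "no_op", "de_refine")  # per-element action space

def per_agent_q(features, weights):
    """Toy per-agent utility: linear scoring of local observations.
    (Stand-in for a graph network over the mesh in VDGN.)"""
    return features @ weights  # shape: (num_actions,)

def joint_value(all_features, weights):
    """Value decomposition: the joint value is the sum of per-agent
    values, each evaluated at that agent's greedy action."""
    return sum(per_agent_q(f, weights).max() for f in all_features)

# One decision step: every element (agent) acts greedily on local info.
rng = np.random.default_rng(0)
num_elements, feat_dim = 5, 4
weights = rng.normal(size=(feat_dim, len(ACTIONS)))
mesh_features = rng.normal(size=(num_elements, feat_dim))  # e.g. local error, depth

actions = [ACTIONS[int(np.argmax(per_agent_q(f, weights)))] for f in mesh_features]
print(actions, joint_value(mesh_features, weights))
```

Because the joint value is a sum of per-agent terms, the greedy joint action is recovered by each agent maximizing its own term, which is what makes decentralized per-element decisions consistent with the cooperative objective.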
In this work, we revisit the marking decisions made in the standard adaptive finite element method (AFEM). Experience shows that naïve marking policies lead to inefficient use of computational resources in adaptive mesh refinement (AMR). Consequently, using AFEM in practice often involves ad hoc or time-consuming offline parameter tuning to set appropriate parameters for the marking subroutine. To address these practical concerns, we recast AMR as a Markov decision process in which refinement parameters can be selected on the fly at run time, without the need for pre-tuning by expert users. In this new paradigm, refinement parameters are chosen adaptively by a marking policy that can be optimized using methods from reinforcement learning. We use the Poisson equation to demonstrate our techniques on $h$- and $hp$-refinement benchmark problems, and our experiments suggest that superior marking policies remain undiscovered for many classical AFEM applications. Furthermore, an unexpected observation from this work is that marking policies trained on one family of PDEs are sometimes robust enough to perform well on problems far outside the training family. For illustration, we show that a simple $hp$-refinement policy trained on 2D domains with only a single re-entrant corner can be deployed on far more complicated 2D domains, and even 3D domains, without a significant loss in performance. To facilitate reproduction and broader adoption, we accompany this work with an open-source implementation of our methods.
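As a concrete reading of "marking as an MDP action", here is a minimal sketch under assumed details: the classical maximum-strategy marking rule stands in for the marking subroutine, and `policy` is a hypothetical hand-written stand-in for the learned policy that selects the refinement parameter $\theta$ at run time from summary statistics of the error estimates.

```python
# Minimal sketch of the AFEM marking step with a run-time-chosen
# parameter. The marking rule and policy below are assumptions for
# illustration, not the paper's implementation.
import numpy as np

def mark(error_estimates, theta):
    """Maximum-strategy marking: select elements whose local error
    estimate exceeds theta times the largest estimate."""
    return np.flatnonzero(error_estimates >= theta * error_estimates.max())

def policy(state):
    """Hypothetical marking policy mapping error statistics to theta.
    In the paper, such a policy is optimized with reinforcement learning."""
    mean, std = state
    return float(np.clip(0.5 + 0.1 * std / (mean + 1e-12), 0.1, 0.9))

rng = np.random.default_rng(1)
errors = rng.lognormal(size=100)       # per-element error estimates
state = (errors.mean(), errors.std())  # observation for the policy
theta = policy(state)                  # refinement parameter chosen at run time
marked = mark(errors, theta)           # elements selected for refinement
print(f"theta={theta:.2f}, marked {marked.size} of {errors.size} elements")
```

The point of the paradigm is visible even in this toy: $\theta$ is no longer a fixed constant tuned offline but a per-step decision conditioned on the current state of the error distribution.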